In this paper, we take a significant step towards the real-world applicability of monocular neural avatar reconstruction by contributing InstantAvatar, a system that can reconstruct human avatars from a monocular video within seconds and animate and render them at interactive rates. To achieve this efficiency we propose a carefully designed and engineered system that leverages emerging acceleration structures for neural fields in combination with an efficient empty-space-skipping strategy for dynamic scenes. We also contribute an efficient implementation that we will make available for research purposes. Compared to existing methods, InstantAvatar converges 130x faster and can be trained in minutes instead of hours. It achieves comparable or even better reconstruction quality and novel-pose synthesis results. Given the same time budget, our method significantly outperforms SoTA methods. InstantAvatar can yield acceptable visual quality in as little as 10 seconds of training time.
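The abstract does not spell out the acceleration details, but one common way to realize empty-space skipping for a moving person is a coarse occupancy grid that caches where the (posed) body has non-negligible density, so the ray marcher only queries the network inside occupied cells. A minimal sketch of this idea is below; `density_fn`, the grid resolution, and the threshold are illustrative assumptions, not InstantAvatar's actual implementation.

```python
import torch

def build_occupancy_grid(density_fn, bbox_min, bbox_max, res=64, thresh=0.01):
    """Evaluate a density field on a coarse 3D grid and mark occupied cells."""
    axes = [torch.linspace(bbox_min[i], bbox_max[i], res) for i in range(3)]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)    # (res, res, res, 3)
    density = density_fn(grid.reshape(-1, 3)).reshape(res, res, res)
    return density > thresh                                             # boolean occupancy

def keep_occupied_samples(points, occupancy, bbox_min, bbox_max):
    """Drop ray samples that fall into empty cells before the costly network query."""
    res = occupancy.shape[0]
    cell = ((points - bbox_min) / (bbox_max - bbox_min) * res).long().clamp(0, res - 1)
    mask = occupancy[cell[:, 0], cell[:, 1], cell[:, 2]]
    return points[mask], mask
```

For a dynamic scene such a grid would have to be rebuilt or maintained per pose (or defined in canonical space), which is where much of the engineering effort presumably lies.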
We present HARP (HAnd Reconstruction and Personalization), a personalized hand avatar creation approach that takes a short monocular RGB video of a human hand as input and reconstructs a faithful hand avatar exhibiting high-fidelity appearance and geometry. In contrast to the major trend of neural implicit representations, HARP models a hand with a mesh-based parametric hand model, a vertex displacement map, a normal map, and an albedo, without any neural components. As validated by our experiments, the explicit nature of our representation enables a truly scalable, robust, and efficient approach to hand avatar creation. HARP is optimized via gradient descent from a short sequence captured by a hand-held mobile phone and can be directly used in AR/VR applications with real-time rendering capability. To enable this, we carefully design and implement a shadow-aware differentiable rendering scheme that is robust to the high degree of articulation and self-shadowing regularly present in hand motion sequences, as well as to challenging lighting conditions. HARP also generalizes to unseen poses and novel viewpoints, producing photo-realistic renderings of hand animations performing highly articulated motions. Furthermore, the learned HARP representation can be used to improve 3D hand pose estimation quality under challenging viewpoints. The key advantages of HARP are validated by in-depth analyses of appearance reconstruction, novel-view and novel-pose synthesis, and 3D hand pose refinement. It is an AR/VR-ready personalized hand representation that shows superior fidelity and scalability.
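As a rough illustration of the explicit representation described above, the sketch below offsets the vertices of a parametric hand mesh along their normals by a learned displacement map and shades the optimized albedo with a plain Lambertian term. The function names and the single-light shading model are assumptions for illustration; HARP's shadow-aware differentiable rendering is considerably more involved.

```python
import torch

def displace_vertices(verts, vert_normals, displacement):
    """Offset each template vertex along its normal by a learned scalar displacement.
    verts, vert_normals: (V, 3); displacement: (V,) from the optimized displacement map."""
    return verts + displacement.unsqueeze(-1) * vert_normals

def lambertian_color(albedo, normals, light_dir, light_rgb):
    """Diffuse shading of the optimized albedo; no shadows or speculars here.
    albedo: (V, 3), normals: (V, 3) unit vectors, light_dir, light_rgb: (3,)."""
    ndotl = (normals * light_dir).sum(dim=-1, keepdim=True).clamp(min=0.0)
    return albedo * light_rgb * ndotl
```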
The ability to create realistic, animatable and relightable head avatars from casual video sequences would open up wide-ranging applications in communication and entertainment. Current methods either build on explicit 3D morphable meshes (3DMM) or exploit neural implicit representations. The former are limited by fixed topology, while the latter are non-trivial to deform and inefficient to render. Furthermore, existing approaches entangle lighting in the color estimation and are thus limited in re-rendering the avatar in new environments. In contrast, we propose PointAvatar, a deformable point-based representation that disentangles the source color into intrinsic albedo and normal-dependent shading. We demonstrate that PointAvatar bridges the gap between existing mesh-based and implicit representations, combining high-quality geometry and appearance with topological flexibility, ease of deformation and rendering efficiency. We show that our method is able to generate animatable 3D avatars using monocular videos from multiple sources including hand-held smartphones, laptop webcams and internet videos, achieving state-of-the-art quality in challenging cases where previous methods fail, e.g., thin hair strands, while being significantly more efficient in training than competing methods.
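The disentanglement described above can be made concrete with a small sketch: each point carries an intrinsic albedo, and its rendered color is that albedo multiplied by a shading value computed from the point normal. The low-order spherical-harmonics lighting model used here is an illustrative assumption rather than PointAvatar's exact shading formulation.

```python
import torch

def shaded_point_colors(albedo, normals, sh_coeffs):
    """Color = intrinsic albedo * normal-dependent shading.
    albedo: (N, 3), normals: (N, 3) unit vectors, sh_coeffs: (4,) order-1 SH lighting."""
    x, y, z = normals.unbind(dim=-1)
    basis = torch.stack([torch.ones_like(x), x, y, z], dim=-1)   # (N, 4) SH basis
    shading = (basis @ sh_coeffs).clamp(min=0.0).unsqueeze(-1)   # (N, 1) shading factor
    return albedo * shading
```

Keeping albedo and shading as separate factors in this way is what allows the avatar to be re-shaded under new lighting.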
We propose a method that leverages graph neural networks, multi-level message passing, and unsupervised training to enable real-time prediction of realistic clothing dynamics. Whereas existing methods based on linear blend skinning must be trained for specific garments, our method is agnostic to body shape and applies to tight-fitting garments as well as loose, free-flowing clothing. Our method furthermore handles changes in topology (e.g., garments with buttons or zippers) and material properties at inference time. As one key contribution, we propose a hierarchical message-passing scheme that efficiently propagates stiff stretching modes while preserving local detail. We empirically show that our method outperforms strong baselines quantitatively and that its results are perceived as more realistic than those of state-of-the-art methods.
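A single message-passing step of the kind such garment networks are built from can be sketched as follows; the layer sizes and feature layout are assumptions, and the paper's hierarchical scheme additionally runs these updates on coarsened copies of the garment graph to propagate stiff stretching modes quickly.

```python
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    """One graph message-passing step over garment vertices connected by mesh edges."""
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, nodes, edge_index, edge_feats):
        src, dst = edge_index                                   # (E,) sender / receiver indices
        msg = self.edge_mlp(torch.cat([nodes[src], nodes[dst], edge_feats], dim=-1))
        agg = torch.zeros_like(nodes).index_add_(0, dst, msg)   # sum incoming messages per node
        return nodes + self.node_mlp(torch.cat([nodes, agg], dim=-1))
```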
We propose GazeNeRF, a 3D-aware method for the task of gaze redirection. Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results. Instead, we build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion. Our method leverages recent advancements in conditional image-based neural radiance fields and proposes a two-stream architecture that predicts volumetric features for the face and eye regions separately. Rigidly transforming the eye features via a 3D rotation matrix provides fine-grained control over the desired gaze angle. The final, redirected image is then attained via differentiable volume compositing. Our experiments show that this architecture outperforms naively conditioned NeRF baselines as well as previous state-of-the-art 2D gaze redirection methods in terms of redirection accuracy and identity preservation.
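The gaze control described above boils down to building a rotation matrix from the target gaze angles, rigidly transforming the eye-stream queries with it, and compositing the two rendered streams. The sketch below shows that idea with a plain alpha-over composite of per-stream RGB and alpha; GazeNeRF composites volumetric features rather than final colors, so treat this purely as an illustration.

```python
import math
import torch

def gaze_rotation(pitch, yaw):
    """3x3 rotation matrix from pitch/yaw gaze angles given in radians."""
    cp, sp, cy, sy = math.cos(pitch), math.sin(pitch), math.cos(yaw), math.sin(yaw)
    rx = torch.tensor([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    ry = torch.tensor([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    return ry @ rx

def rotate_eye_queries(points, rot):
    """Rigidly rotate eye-stream query points (N, 3) by the gaze rotation."""
    return points @ rot.t()

def composite_streams(rgb_face, alpha_face, rgb_eye, alpha_eye):
    """Alpha-over composite with the eye stream in front of the face stream."""
    rgb = rgb_eye * alpha_eye + rgb_face * alpha_face * (1.0 - alpha_eye)
    alpha = alpha_eye + alpha_face * (1.0 - alpha_eye)
    return rgb, alpha
```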
We present Depth-aware Image-based NEural Radiance fields (DINER). Given a sparse set of RGB input views, we predict depth and feature maps to guide the reconstruction of a volumetric scene representation that allows us to render 3D objects under novel views. Specifically, we propose novel techniques to incorporate depth information into feature fusion and efficient scene sampling. In comparison to the previous state of the art, DINER achieves higher synthesis quality and can process input views with greater disparity. This allows us to capture scenes more completely without changing capturing hardware requirements and ultimately enables larger viewpoint changes during novel view synthesis. We evaluate our method by synthesizing novel views, both for human heads and for general objects, and observe significantly improved qualitative results and better perceptual metric scores compared to the previous state of the art. The code will be made publicly available for research purposes.
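The "efficient scene sampling" mentioned above can be pictured as concentrating ray samples around the depth predicted for each pixel instead of marching uniformly through the whole frustum. A minimal sketch under that assumption follows; the sample counts, the Gaussian spread `sigma`, and the near/far bounds are all illustrative and not DINER's actual parameters.

```python
import torch

def depth_guided_samples(ray_o, ray_d, pred_depth, n_uniform=32, n_focused=32,
                         sigma=0.05, near=0.1, far=4.0):
    """Sample points along rays: a sparse uniform set plus a set concentrated
    around the depth predicted for each ray. ray_o, ray_d: (R, 3); pred_depth: (R,)."""
    n_rays = ray_o.shape[0]
    t_uniform = torch.linspace(near, far, n_uniform).expand(n_rays, n_uniform)
    t_focused = pred_depth.unsqueeze(-1) + sigma * torch.randn(n_rays, n_focused)
    t = torch.sort(torch.cat([t_uniform, t_focused], dim=-1), dim=-1).values.clamp(near, far)
    return ray_o.unsqueeze(1) + t.unsqueeze(-1) * ray_d.unsqueeze(1)   # (R, S, 3)
```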
Neural fields have revolutionized the area of 3D reconstruction and novel view synthesis of rigid scenes. A key challenge in making such methods applicable to articulated objects, such as the human body, is to model the deformation of 3D locations between the rest pose (a canonical space) and the deformed space. We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space via iterative root finding. Fast-SNARF is a drop-in functional replacement for our previous work, SNARF, while significantly improving its computational efficiency. We contribute several algorithmic and implementation improvements over SNARF, yielding a speed-up of 150x. These improvements include voxel-based correspondence search, pre-computing the linear blend skinning function, and an efficient software implementation with CUDA kernels. Fast-SNARF enables efficient and simultaneous optimization of shape and skinning weights given deformed observations without correspondences (e.g. 3D meshes). Because the learning of deformation maps is a crucial component of many 3D human avatar methods and Fast-SNARF provides a computationally efficient solution, we believe that this work represents a significant step towards the practical creation of 3D virtual humans.
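The core of the articulation module is solving, for each deformed query point, for the canonical point that linear blend skinning maps onto it. The sketch below shows forward LBS and a deliberately simple damped fixed-point search for that correspondence; SNARF/Fast-SNARF instead use Broyden's method with multiple bone-based initializations, plus the voxelized skinning field and CUDA kernels mentioned above, so this only conveys the structure of the problem.

```python
import torch

def lbs_forward(x_c, weights, bone_transforms):
    """Warp canonical points to posed space with linear blend skinning.
    x_c: (N, 3), weights: (N, B), bone_transforms: (B, 4, 4)."""
    T = torch.einsum("nb,bij->nij", weights, bone_transforms)     # per-point blended transform
    x_h = torch.cat([x_c, torch.ones_like(x_c[:, :1])], dim=-1)   # homogeneous coordinates
    return torch.einsum("nij,nj->ni", T, x_h)[:, :3]

def find_canonical(x_d, skinning_weight_fn, bone_transforms, n_iter=20, step=0.5):
    """Search for canonical points x_c such that lbs_forward(x_c) matches the query x_d."""
    x_c = x_d.clone()                                             # naive initialization
    for _ in range(n_iter):
        w = skinning_weight_fn(x_c)                               # query the skinning field
        residual = lbs_forward(x_c, w, bone_transforms) - x_d
        x_c = x_c - step * residual                               # damped correction step
    return x_c
```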
We present a method for inferring diverse 3D models of human-object interaction from images. Reasoning about how humans interact with objects in complex scenes from a single 2D image is a challenging task, given the ambiguities that arise from the loss of information through projection. Moreover, modeling 3D interactions requires the ability to generalize across diverse object categories and interaction types. We propose an action-conditioned modeling of interactions that allows us to infer diverse 3D arrangements of humans and objects in terms of contact regions and 3D scene geometry. Our method extracts high-level commonsense knowledge from large language models (such as GPT-3) and applies it to 3D reasoning about human-object interactions. Our key insight is that priors extracted from large language models can help in reasoning about human-object contacts from textual prompts. We quantitatively evaluate the inferred 3D models on a large human-object interaction dataset and show how our method leads to better 3D reconstructions. We further evaluate the effectiveness of our method on real images and demonstrate its generalizability across interaction types and object categories.
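One way to picture the use of language-model priors described above is a text query asking which body parts are likely in contact with the object for a given action, whose answer then constrains the 3D layout. The sketch below is purely illustrative: `query_llm`, the body-part list, and the prompt wording are hypothetical stand-ins, not the prompts used in the paper.

```python
# Hypothetical stand-ins for illustration only.
BODY_PARTS = ["left hand", "right hand", "hips", "back", "feet"]

def contact_prompt(action, obj):
    """Build a commonsense question about likely contact regions."""
    return (f"A person is performing the action '{action}' with a '{obj}'. "
            f"Which of these body parts touch the object: {', '.join(BODY_PARTS)}? "
            f"Answer with a comma-separated list.")

def contact_prior(action, obj, query_llm):
    """Parse the model's answer into a set of body parts used as a contact prior."""
    answer = query_llm(contact_prompt(action, obj))   # e.g. "hips, back" for sitting on a chair
    return [part for part in BODY_PARTS if part in answer.lower()]
```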
We present TemPCLR, a new time-coherent contrastive learning approach for the structured regression task of 3D hand reconstruction. Unlike previous contrastive methods for hand pose estimation, our framework considers temporal consistency in its augmentation scheme and accounts for the differences of hand poses along the temporal direction. Our data-driven method leverages unlabeled videos and a standard CNN, without relying on synthetic data, pseudo-labels, or specialized architectures. Our approach improves the performance of fully supervised hand reconstruction methods by 15.9% and 7.6% on the HO-3D and FreiHAND datasets respectively, establishing new state-of-the-art performance. Finally, we show that our approach produces smoother hand reconstructions over time and is more robust to heavy occlusions than previous state-of-the-art work, which we demonstrate both quantitatively and qualitatively. Our code and models will be available at https://eth-ait.github.io/tempclr.
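The temporal contrastive objective can be illustrated with a standard InfoNCE loss in which the positive for each embedding comes from a nearby frame of the same video and all other batch entries serve as negatives; TemPCLR additionally designs its augmentations to respect temporal consistency, which this sketch does not show.

```python
import torch
import torch.nn.functional as F

def temporal_info_nce(z_anchor, z_positive, temperature=0.07):
    """InfoNCE loss; z_positive[i] is the embedding of a temporally nearby frame
    from the same video as z_anchor[i]. Both tensors have shape (B, D)."""
    z_a = F.normalize(z_anchor, dim=-1)
    z_p = F.normalize(z_positive, dim=-1)
    logits = z_a @ z_p.t() / temperature                       # (B, B) cosine similarities
    labels = torch.arange(z_a.shape[0], device=z_a.device)     # diagonal entries are positives
    return F.cross_entropy(logits, labels)
```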
A unique challenge in creating high-quality animatable and relightable 3D human avatars is modeling the eyes. Synthesizing eyes is a multifold challenge, as it requires 1) appropriate representations of the eye and the periocular region for coherent viewpoint synthesis, capable of representing diffuse, refractive and highly reflective surfaces, 2) disentangling skin and eye appearance from environmental illumination so that they can be rendered under novel lighting conditions, and 3) capturing eyeball motion and the deformation of the surrounding skin to enable re-gazing. Traditionally, these challenges have required expensive and cumbersome capture setups to obtain high-quality results, and even then, modeling the eye region holistically has remained elusive. We present a novel geometry and appearance representation that enables high-fidelity capture, photorealistic animation, view synthesis and relighting of the eye region using only a sparse set of lights and cameras. Our hybrid representation combines an explicit parametric surface model of the eyeball with implicit deformable volumetric representations of the periocular region and the interior of the eye. This novel hybrid model is designed to address the different parts of this challenging facial area: the explicit eyeball surface allows modeling refraction and high-frequency specular reflection at the cornea, whereas the implicit representation is well suited to modeling lower-frequency skin reflection via spherical harmonics and can represent non-surface structures such as hair or diffuse volumetric bodies, both of which are challenging for explicit surface models. We show that, for high-resolution close-ups of the eye, our model can synthesize high-fidelity animated gaze from novel views under unseen illumination conditions.
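To make the hybrid representation concrete, the sketch below composites a volume-rendered periocular layer over an explicitly rendered eyeball surface at the pixels where a ray hits the eyeball, and over the background elsewhere. This per-pixel over-composite is a simplification of the paper's rendering, which also handles refraction at the cornea along the ray; the tensor layout is an assumption.

```python
import torch

def composite_hybrid(vol_rgb, vol_alpha, surf_rgb, surf_mask, bg_rgb):
    """Volume layer over the explicit surface (or background where no eyeball is hit).
    vol_rgb: (H, W, 3) premultiplied volume color, vol_alpha: (H, W, 1),
    surf_rgb, bg_rgb: (H, W, 3), surf_mask: (H, W, 1) in {0, 1}."""
    behind = surf_mask * surf_rgb + (1.0 - surf_mask) * bg_rgb   # what lies behind the volume
    return vol_rgb + (1.0 - vol_alpha) * behind
```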